technovangelist / scripts / this may be my favorite simple ollama gui


When working with Ollama, some folks really like to have a UI: a user interface that helps and guides them through using artificial intelligence, and that is not on the command line. Recently, I did a video about another user interface called Open WebUI. It has made some interesting choices about how it works, but installation can be a little daunting. It uses Docker, so you need to have Docker installed and configured, and that can be hard depending on where your experience lies. So let's look at a different tool that does just the basics and does them really well.

Before I get into it, I like to remind folks that while I was a founding member of the Ollama team, I am no longer part of that team. Everything I say here is purely my own opinion.

At first I thought the name was a reference to Mystery Science Theater, but the authors confirmed with me that it is just MSTY. It's a simple download for Windows, Linux, and Mac; in fact, they have downloads for most of the variations you are likely to need. When you first install it, you'll see a choice: do you want to work with models locally, or with models up in the cloud? For the cloud models, enter an API key to get started. For local models, just press enter and it'll download a model if you don't have any, or you can start working with the models that you have. It assumes Ollama is running on your local machine, though they are going to add the ability to work with a remote machine soon.
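Under the hood, any client like this has to find the Ollama server before it can do anything. I don't know how MSTY implements it, but here is a minimal sketch of the usual convention: Ollama listens on localhost port 11434 by default, and the `OLLAMA_HOST` environment variable is the standard way to point at a different machine.

```python
import os

def ollama_base_url() -> str:
    """Resolve the Ollama server URL the way most clients do.

    Defaults to the local server on Ollama's standard port 11434;
    honors OLLAMA_HOST when set (e.g. for a remote machine).
    """
    host = os.environ.get("OLLAMA_HOST", "localhost:11434")
    # OLLAMA_HOST may be given with or without a scheme; add one if missing.
    if not host.startswith(("http://", "https://")):
        host = "http://" + host
    return host

# A quick liveness check is an HTTP GET against this base URL, which
# returns the text "Ollama is running" when the server is up.
```

This is why "remote machine" support is mostly a matter of letting you change that one base URL.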

So here I am in … MSTY. I can choose a model from this dropdown and ask a question: why is the sky blue? And we get a nice answer. Now check out these buttons below the answer. We can edit the answer, in case you want to influence the context as you continue to work with the model. We can regenerate the answer. And then we quickly get into some of the crazy things we can do with this 'simple' UI. Every time you work with a model, you are likely to get a different answer to the same question. So if I regenerate three times, I can use these arrows to go back and forth between each one. Each answer is now a new branch added to our conversation, and so a simple chat in most apps becomes this potentially more complex time-traveling experience. It reminds me a bit of a movie with Gwyneth Paltrow called Sliding Doors, which follows two timelines that diverge when the character misses a train on the London Underground.
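The reason regenerating gives a different answer each time is sampling temperature: above zero, the model samples among likely next tokens rather than always picking the single most likely one. Whether MSTY exposes this setting I can't say, but the underlying Ollama API does, via the `options` field on `/api/generate`. A sketch of the request body (the model name is just an example):

```python
import json

def generate_payload(model: str, prompt: str, temperature: float = 0.8) -> str:
    """Build the JSON body for a POST to Ollama's /api/generate endpoint.

    temperature=0 makes the output near-deterministic; higher values
    (Ollama's default is 0.8) make regenerated answers diverge.
    """
    body = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of a stream
        "options": {"temperature": temperature},
    }
    return json.dumps(body)

# The same prompt at temperature 0 should regenerate (nearly) the
# same answer every time; at 0.8 you get the branching behavior above.
payload = generate_payload("llama2", "why is the sky blue", temperature=0)
```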

MSTY is certainly not the first to do this, but they do it so nicely and cleanly.

So I can ask a follow-on question in one branch and a different question in another, and those arrows switch the whole branch. I really wish there was some visualization here where I could see all the branches, much like what we see in so many git-based tools.

We can also click this button to branch the conversation, though I don't know if that is the best term for what is happening here; it gets a bit confusing. It's actually copying the conversation to a new conversation in the left sidebar, whereas regenerating creates a branch in the current view. And the conversations in that sidebar all have the same name, so it's hard to tell which is which. Open WebUI had an interesting feature where it would use another model to name the conversation automatically.

Let's go back to the edit screen for our last question. This gets a lot more interesting when you look at the icons at the bottom of the edit box. Now we can update what we asked by replacing it with a prompt from the library. This is a really cool idea, but I wish I had more control here. Maybe I don't want to replace my existing prompt but rather add to it. I can do that by clicking manage and copying and pasting the prompts, but then I have to do the work. I can also do some branch activities.

Try editing an answer, then choose the refine button. Now I can choose from a list of refinements and regenerate the answer. I think there is a lot you can do with these.

Try creating a new prompt and click on the quick prompts button. We see a bunch of prompts to get us going. You can choose any of these, edit it to make it right for you, and press enter. There are also some prompts that offer something like the variables in Open WebUI, but they seem to be better implemented here in MSTY. Try the SEO one. The variables, indicated by a word or phrase in curly brackets, are highlighted at the top of the box; click one to enter the value. I can also choose to refine the prompt using those refinements. This can be a lot of fun.

Whenever you list the prompts or refinements, you can choose to manage them. You can also access this from the left sidebar. At first when I looked at this, I couldn't understand what was going on. I saw AI Assisted Doctor, but then couldn't get it to show up in the quick prompts dropdown. At the top left you can see there are three categories of prompts: system, user, and refine. Each entry has a tiny colored dot that is easy to miss, indicating which category the prompt falls under. AI Assisted Doctor is a system prompt, and we haven't used those yet. So click on User at the top and we see all the user prompts that appeared in the quick prompts list.

For each item, we can see the prompt, along with a sample output and some tags that describe this prompt. Under refine we can see all the refinements, and the sample inputs and outputs.

So let's take a look at using those system prompts. Create a new chat, then click the text in the middle of the screen and we can set a system prompt. This is great. I wish there was a way to set these things and then save the result as a new model.
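Ollama itself can already do that saving step outside of MSTY: a system prompt can be baked into a new model with a Modelfile. A minimal sketch, where the base model name and prompt text are just placeholders:

```
# Hypothetical Modelfile: bake a system prompt into a new model.
FROM llama2
SYSTEM """You are an AI assisted doctor. Ask clarifying questions
before suggesting possible causes and next steps."""
```

Save that as `Modelfile` and run `ollama create ai-doctor -f Modelfile`; the new model then shows up in any client that lists your local Ollama models.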

Speaking of models, there is some minimal model management here. They have cards for most if not all of the models. I am not sure if these are manually added, or if they are using the API to pull the list from ollama.com, which I have demonstrated how to do in the past. But for the cards they have, they make it easy to download different variants, though I assume they are only offering the 'latest' tag, so q4. If you want a different quant, you will have to pull it from the CLI, but then it shows up here in MSTY.

When you have a few models, you can add a split chat. Although it looks like there are different prompt text boxes, the chats are in sync, so when you press enter, both models will be asked the same question. This is a great way to compare how different models perform so you can decide which is best for you. I love this. There are a few other places where this split chat interface comes up.
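Conceptually, a split chat is just fanning the same prompt out to several models. I don't know how MSTY does it internally, but against Ollama's `/api/chat` endpoint it boils down to one request body per model; the model names below are only examples:

```python
import json

def split_chat_payloads(prompt: str, models: list[str]) -> list[str]:
    """Build one /api/chat JSON body per model for the same user prompt.

    POSTing each body to Ollama and showing the answers side by side
    is essentially what a split-chat comparison view does.
    """
    bodies = []
    for model in models:
        bodies.append(json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        }))
    return bodies

payloads = split_chat_payloads("why is the sky blue", ["llama2", "mistral"])
```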

There are some app settings you can change, but not much beyond adding API keys for other services. There is no RAG here, at least not yet, though if that is in their future, I am excited to see how they will implement it. They are tackling a small set of key features in the current version, and what they are doing, they are doing very well. The whole branching thing is great, though I look forward to the refinements that will surely come in the future. Yes, other tools do it too, but none achieve it in the same elegant way as what we see here.

As for annoyances? Yes, there are a few. I hate that when I open a dialog, I have to click the X to close it rather than just clicking away. And I really hate that I have to use the mouse and click icons rather than having all the keyboard shortcuts they have in Open WebUI. I would love to be able to use the @ sign to choose a model, or maybe / for something and pipe for something else. I really wish there was some visualization for branching, to understand how conversations relate to each other. And I wish I could save models to quickly bring up later. There are folders on the side, but it's hard to know when I would want to use them. It does have the ability to do speech to text, but you have to enter an OpenAI API key, and even then I couldn't get it to work. The edit window is a bit confusing: do I refine or choose a prompt? You can do both, but I think you should only be able to do one depending on where you are.

So yeah, there are problems, but this is a really new app; there have been only seven releases. It's amazing that they have refined their approach so much in such a short time. If you don't care about RAG, or are using a different tool for that, I think MSTY is definitely the best option I have seen so far.

What do you think? Have you worked with MSTY? Do you like it? Is there another tool that works better for you? Let me know in the comments below. Thanks so much for being here. Goodbye.